Transfer bounds for linear feature learning
Similar Resources
Feature Selection for Transfer Learning
A common assumption in most machine learning algorithms is that labeled (source) data and unlabeled (target) data are sampled from the same distribution. However, many real-world tasks violate this assumption: in temporal domains, feature distributions may vary over time; clinical studies may suffer from sampling bias; or sufficient labeled data for the domain of interest may simply not exist, and ...
Feature Design for Transfer Learning
Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution and labeled using the same function. However, often we have labeled data for a task related to the target task but not for the target task itself. Under what conditions does a good classifier for the related task transfer to the target task? Feature representations th...
Feature Selection by Transfer Learning with Linear Regularized Models
This paper presents a novel feature selection method for classification of high dimensional data, such as those produced by microarrays. It includes a partial supervision to smoothly favor the selection of some dimensions (genes) on a new dataset to be classified. The dimensions to be favored are previously selected from similar datasets in large microarray databases, hence performing inductive...
Generalization Bounds for Linear Learning Algorithms
We study generalization properties of linear learning algorithms and develop a data-dependent approach that is used to derive generalization bounds that depend on the margin distribution. Our method makes use of random projection techniques to allow the use of existing VC dimension bounds in the effective, lower dimension of the data. Comparisons with existing generalization bounds show that ou...
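The random-projection idea mentioned in this abstract can be illustrated with a minimal sketch, assuming a Gaussian projection matrix in the spirit of Johnson-Lindenstrauss (this is an illustration, not the paper's actual construction): projecting high-dimensional data to a lower "effective" dimension approximately preserves pairwise distances, and hence margins.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 100, 1000, 200           # samples, original dim, projected dim
X = rng.standard_normal((n, d))    # toy high-dimensional data

# Random projection with i.i.d. N(0, 1/k) entries: E[||Rx||^2] = ||x||^2.
R = rng.standard_normal((d, k)) / np.sqrt(k)
Z = X @ R

# Distance between two points before and after projection.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Z[0] - Z[1])
ratio = proj / orig                # concentrates near 1 for moderate k
```

Because the squared ratio is a chi-square variable with k degrees of freedom scaled by 1/k, its deviation from 1 shrinks like sqrt(2/k), which is what lets bounds stated in dimension k apply to the original high-dimensional data up to a small distortion.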
Bounds for Linear Multi-Task Learning
We give dimension-free and data-dependent bounds for linear multi-task learning where a common linear operator is chosen to preprocess data for a vector of task specific linear-thresholding classifiers. The complexity penalty of multi-task learning is bounded by a simple expression involving the margins of the task-specific classifiers, the Hilbert-Schmidt norm of the selected preprocessor and ...
Journal
Journal title: Machine Learning
Year: 2009
ISSN: 0885-6125,1573-0565
DOI: 10.1007/s10994-009-5109-7